
    DSIM: A distributed simulator

    Discrete event-driven simulation makes it possible to model a computer system in detail. However, such simulation models can require significant time to execute. This is especially true when modeling large parallel or distributed systems containing many processors and a complex communication network. One solution is to distribute the simulation over several processors. If enough parallelism is achieved, large simulation models can be executed efficiently. This study proposes a distributed simulator called DSIM which can run on various architectures. A simulated test environment is used to verify and characterize the performance of DSIM. The results of the experiments indicate that speedup is application-dependent and, in DSIM's case, also depends on how the simulation model is distributed among the processors. Furthermore, the experiments reveal that the communication overhead of Ethernet-based distributed systems makes it difficult to achieve reasonable speedup unless the simulation model is computation bound.
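The core of any discrete event-driven simulator of the kind DSIM distributes is a clock plus a time-ordered event queue. The sketch below is a minimal sequential version of that engine, assuming illustrative names throughout (it is not DSIM's actual API); the example models two simulated processors exchanging messages over a link with a fixed delay.

```python
import heapq

class Simulator:
    """Minimal discrete event-driven simulation core (illustrative)."""

    def __init__(self):
        self.clock = 0.0
        self.queue = []   # heap of (time, seq, callback), ordered by time
        self._seq = 0     # tie-breaker so callbacks are never compared

    def schedule(self, delay, callback):
        heapq.heappush(self.queue, (self.clock + delay, self._seq, callback))
        self._seq += 1

    def run(self, until):
        # Pop events in timestamp order, advancing the clock as we go.
        while self.queue and self.queue[0][0] <= until:
            self.clock, _, callback = heapq.heappop(self.queue)
            callback(self)

# Example: two "processors" ping-ponging a message with a 1.5-unit link delay.
log = []

def send(sim, src, dst, hop):
    log.append((sim.clock, src, dst))
    if hop < 3:
        sim.schedule(1.5, lambda s: send(s, dst, src, hop + 1))

sim = Simulator()
sim.schedule(0.0, lambda s: send(s, "P0", "P1", 0))
sim.run(until=10.0)
# log now holds four events at t = 0.0, 1.5, 3.0, 4.5
```

Distributing this loop, as DSIM does, means partitioning the event queue across processors and synchronizing timestamps over the network, which is exactly where the Ethernet communication overhead noted in the abstract bites.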

    A study of the relationship between the performance and dependability of a fault-tolerant computer

    This thesis studies the relationship between performance and dependability by creating a tool (FTAPE) that integrates a high-stress workload generator with fault injection, and by using the tool to evaluate system performance under error conditions. The workloads are comprised of processes formed from atomic components that represent CPU, memory, and I/O activity. The fault injector is software-implemented and is capable of injecting faults into any memory-addressable location, including special registers and caches. This tool has been used to study a Tandem Integrity S2 computer. Workloads with varying numbers of processes and varying compositions of CPU, memory, and I/O activity are first characterized in terms of performance. Then faults are injected into these workloads. The results show that as the number of concurrent processes increases, the mean fault latency initially increases due to increased contention for the CPU. However, for even higher numbers of processes (more than 3 processes), the mean latency decreases because long-latency faults are paged out before they can be activated.
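The essence of software-implemented fault injection as described above is corrupting a single addressable location and then measuring the latency until the fault is activated. The sketch below illustrates only the injection step, assuming a plain `bytearray` standing in for the memory region (real injectors like FTAPE can also target special registers and caches); all names are illustrative.

```python
import random

def inject_bit_flip(memory: bytearray, rng: random.Random):
    """Flip one random bit in the target region; return its location so
    the latency from injection to activation can later be measured."""
    offset = rng.randrange(len(memory))
    bit = rng.randrange(8)
    memory[offset] ^= (1 << bit)   # corrupt exactly one bit
    return offset, bit

rng = random.Random(42)            # fixed seed for a repeatable campaign
memory = bytearray(1024)           # pristine region: all zeros
offset, bit = inject_bit_flip(memory, rng)
assert memory[offset] == (1 << bit)
```

A full campaign would repeat this under workloads of varying process counts, timestamping injection and first access to the corrupted location to obtain the latency distribution the abstract discusses.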

    Design for dependability: A simulation-based approach

    This research addresses issues in simulation-based system-level dependability analysis of fault-tolerant computer systems. The issues and difficulties of providing a general simulation-based approach for system-level analysis are discussed, and a methodology that addresses these issues is presented. The proposed methodology is designed to permit the study of a wide variety of architectures under various fault conditions. It permits detailed functional modeling of architectural features such as sparing policies, repair schemes, and routing algorithms, as well as other fault-tolerant mechanisms, and it allows the execution of actual application software. One key benefit of this approach is that the behavior of a system under faults does not have to be pre-defined, as is normally done. Instead, a system can be simulated in detail and injected with faults to determine its failure modes. The thesis describes how object-oriented design is used to incorporate this methodology into a general-purpose design and fault injection package called DEPEND. A software model is presented that uses abstractions of application programs to study the behavior and effect of software on hardware faults in the early design stage, when actual code is not available. Finally, an acceleration technique that combines hierarchical simulation, time acceleration algorithms and hybrid simulation to reduce simulation time is introduced.
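The key idea above is that failure behavior is observed rather than pre-defined: components are modeled functionally (including sparing policies), faults are injected, and the failure mode emerges from the simulation. A minimal object-oriented sketch of that idea, with entirely illustrative class and method names (not DEPEND's actual interface):

```python
class Component:
    """A simulated component with an optional spare (sparing policy)."""

    def __init__(self, name, spare=None):
        self.name = name
        self.spare = spare
        self.faulty = False

    def inject_fault(self):
        self.faulty = True

    def service(self):
        # Failure behavior is not pre-scripted: it emerges from the
        # interaction of the injected fault and the sparing policy.
        if not self.faulty:
            return self.name
        if self.spare is not None:
            return self.spare.service()   # fail over to the spare
        raise RuntimeError("system failure")

primary = Component("primary", spare=Component("spare"))
primary.inject_fault()
assert primary.service() == "spare"   # observed failure mode: masked by spare
```

Scaling this style up to routing algorithms, repair schemes, and real application software running on the simulated system is what the DEPEND package provides.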

    Modeling for the Modified Atmosphere Packaging of Sapota Fruit

    Modified atmosphere packaging utilizes polymeric films with selective permeability for O2 and CO2, modifying the storage environment by lowering the O2 concentration and increasing the CO2 concentration. Modeling and design of modified atmosphere storage for sapota fruit was carried out with different packaging materials to extend shelf life. Based on the respiration rate of the fruit and the permeability of the packaging material, the films found suitable for modified atmosphere packaging of sapota fruit were low-density polyethylene, polyvinyl chloride, polypropylene and polystyrene. Saran and polyester films were found to be unsuitable for MAP storage of sapota fruit. The developed model provides an effective way to select the proper packaging material for sapota fruit, and it can also be employed for designing packaging films for other fruits and vegetables.
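The match between respiration rate and film permeability that this kind of model turns on can be expressed as a steady-state gas balance: at equilibrium, the O2 permeating in through the film equals the O2 consumed by respiration. The sketch below solves that balance for the in-package O2 fraction; all numeric values are illustrative placeholders, not measured data for sapota or any specific film.

```python
def steady_state_o2(y_atm, permeance, area, resp_rate, mass):
    """In-package O2 fraction at steady state, from the flux balance
        permeance * area * (y_atm - y_in) = resp_rate * mass

    y_atm     : ambient O2 fraction (~0.21 in air)
    permeance : film O2 permeance, mL / (m^2 * h * atm)
    area      : package surface area, m^2
    resp_rate : fruit respiration rate, mL O2 / (kg * h)
    mass      : fruit mass, kg
    """
    return y_atm - (resp_rate * mass) / (permeance * area)

# Illustrative numbers only: a 0.5 kg pack, 0.05 m^2 of film.
y_in = steady_state_o2(y_atm=0.21, permeance=2000, area=0.05,
                       resp_rate=20, mass=0.5)
# -> 0.11, i.e. the package equilibrates at ~11% O2 with these values
```

A film is "suitable" in this framework when the resulting in-package O2 (and the analogous CO2 balance) lands inside the fruit's recommended storage window; films whose permeance drives O2 too low, as with the barrier films ruled out above, are rejected.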

    Measurement of Radiation Exposure of Hands and Fingers of Nuclear Medicine Personnel Using Innovative Low-Cost GM Based System

    Pocket dosimeters are commonly used by nuclear medicine and radiology workers for daily dose measurement, or during certain high-exposure procedures for short periods of time. They are expensive and incur substantial maintenance costs, so it may not be feasible to procure many at a time. We designed a low-cost remote dose-monitoring device for measuring radiation over short periods. The device was constructed around a micro Geiger-Muller counter (model SBM-21) connected to a high-voltage circuit from which the output signal was obtained. The signal was fed to an Arduino-compatible ESP8266 board running custom code, and readings were viewed on a mobile/laptop application connected to the board via the internet. Readings were taken for short durations during certain procedures and compared simultaneously against a standard measuring instrument. For hand-exposure measurement, the tube was attached to the worker's hand with a wire. Readings were recorded in μSv/h or mSv/h. Results: Readings were taken on various occasions while preparing (59) and dispensing (45) radioisotopes. During preparation of radiopharmaceuticals, average readings for 99mTc-Sestamibi [16 samples] (100-250 mCi), 99mTc-MDP [21 samples] (150-200 mCi) and 99mTc-DTPA [22 samples] (10-20 mCi) were 1115 μSv/h, 3083 μSv/h and 453 μSv/h respectively. During dispensing of radiopharmaceuticals, average readings of 15 samples each for 99mTc-Sestamibi (100-250 mCi), 99mTc-MDP (150-200 mCi) and 99mTc-DTPA (10-20 mCi) were 351 μSv/h, 209 μSv/h and 235 μSv/h respectively. It was feasible for workers to carry the measuring device in a pocket while working. Conclusion: Our pilot study revealed accurate readings more than 95% of the time over short measurement periods. Further standardization of the design and of the readings, with better real-time accuracy, could make the method more reliable; it offers a low-cost means of monitoring short periods of radiation exposure for radiation workers.
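The firmware side of a monitor like this reduces to counting GM-tube pulses over an interval and scaling to a dose rate. The sketch below shows that conversion in Python for clarity; the sensitivity constant is an assumed placeholder, not an SBM-21 datasheet value, since real devices calibrate against a standard instrument exactly as this study did.

```python
# Assumed tube sensitivity (uSv/h per count-per-second) -- a placeholder
# for illustration, NOT a published SBM-21 calibration figure.
USV_PER_HOUR_PER_CPS = 0.4

def dose_rate_usv_per_h(counts: int, interval_s: float) -> float:
    """Convert raw GM pulses counted over an interval to an estimated
    dose rate in uSv/h, using the (assumed) linear sensitivity above."""
    cps = counts / interval_s
    return cps * USV_PER_HOUR_PER_CPS

# e.g. 150 pulses in a 10 s window -> 15 cps -> 6.0 uSv/h here
reading = dose_rate_usv_per_h(150, 10)
```

On the actual hardware this counting would run on the ESP8266 (pulses on an interrupt pin), with the computed μSv/h value pushed over the internet to the mobile/laptop application; dead-time correction and energy dependence of the tube are ignored in this sketch.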